
rebase and fix lora config and file dep bug #28

Merged: 112 commits, Feb 21, 2025

Conversation

liufengwei0103 (Collaborator) commented Feb 21, 2025

Before submitting

# Install and register `pre-commit` in the project folder
pip install pre-commit && pre-commit install

# Process previous code files separately
pre-commit run --files XXXX.py
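As a convenience beyond the template steps above, the hooks can also be swept over the whole tree before pushing; `--all-files` is a standard pre-commit flag, and this extra step is optional rather than required by this repo.

# Optional: run every registered hook against all tracked files
pre-commit run --all-files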

PR types

PR changes

Description

1. Rebase
2. Fix the SFT file-dependency bug and the LoRAAutoConfig bug
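For readers unfamiliar with the config being fixed, here is a minimal sketch of how LoRA is typically wired up with PaddleNLP's peft API (LoRAConfig, LoRAModel, target_modules, r, lora_alpha). It is illustrative only: the auto-parallel LoRAAutoConfig patched in this PR is assumed to mirror this interface, the model id is a placeholder, and none of the lines below come from the PR diff itself.

# Illustrative sketch: standard PaddleNLP LoRA wiring; the auto-parallel
# LoRAAutoConfig fixed in this PR is assumed to expose a similar interface.
from paddlenlp.peft import LoRAConfig, LoRAModel
from paddlenlp.transformers import AutoModelForCausalLM

# Placeholder model id; replace with the checkpoint used in your SFT run.
model = AutoModelForCausalLM.from_pretrained("facebook/llama-7b")

lora_config = LoRAConfig(
    target_modules=[".*q_proj.*", ".*v_proj.*"],  # regexes for layers to adapt
    r=8,            # low-rank dimension
    lora_alpha=16,  # scaling factor applied to the LoRA update
)
model = LoRAModel(model=model, lora_config=lora_config)
model.mark_only_lora_as_trainable()  # freeze base weights, train only LoRA params

A broken LoRA config surfaces exactly at this construction step during SFT; the actual fix lives in the PR diff, not in this snippet.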

xuxinyi389 and others added 30 commits January 8, 2025 16:15
* update emb doc

* update register_sequence_parallel_allreduce_hooks

* update fuse_sequence_parallel_allreduce
* fix auto tokenizer

* fix auto tokenizer

* lint

---------

Co-authored-by: lyuwenyu <[email protected]>
* update deepseek-v2

* add deepseek_v3

* update for deepseekv3

* update deepseekv3 model_ids
* fix loraga merge

* change sign
* [AutoParallel]:fix ernine auto_trainer error

* [AutoParallel]:fix ernine auto_trainer error

* [AutoParallel]:fix ernine auto_trainer error

* [AutoParallel]:fix ernine auto_trainer error

* [AutoParallel]:fix ernine auto_trainer error

* [AutoParallel]:fix ernine auto_trainer error

* [AutoParallel]:fix ernine auto_trainer error

* [AutoParallel]:fix ernine auto_trainer error

* [AutoParallel]:fix ernine auto_trainer error

* [AutoParallel]:fix ernine auto_trainer error

* [AutoParallel]:fix ernine auto_trainer error

* [AutoParallel]:fix ernine auto_trainer error

* [AutoParallel]:fix ernine auto_trainer error

* Update run_pretrain_auto.py
* fix sequence parallel

* update register_sequence_parallel_allreduce_hooks

* update fuse_sequence_parallel_allreduce
* [AutoParallel]:fix ci error

* [AutoParallel]:fix ci error
…ePaddle#9787)

* [LLM] support flash device on static model

* [LLM] adapt pdc sdk
* add no_proxy & del paddlenlp_ops

* update timeout for dpo

* fix sequence_parallel

* add timeout

* add Total_Tokens_per_second_per_gpu

* fix Tokens_per_second_per_gpu

* update Total_Tokens_per_second_per_gpu
* mergekit gpu 1226

* merge model gpu

* merge gpu

* add lora model

* change valueerror

* add lora

* gpu test
* [LLM] update llm server dockerfiles

* merge code from fastdeploy
* Update README.md

* Update README.md

* Update README_en.md
@liufengwei0103 merged commit 211a493 into blacksheep-Aristotle:auto_sft on Feb 21, 2025